Images and Thinking

(Critique of arguments against images as a medium of thought)

David Cole


Page 2

A better model for mental imagery than a viewer looking at a photo comes from images in computers. This model has changed over the years - Tye quotes Ned Block as supposing computers are best suited to operating on sentential representations (Tye 1991, pp. 45-6). At the time Block was writing, it was generally true that computers either did number crunching or handled sentential input. Increasingly, though, computers process imagistic representations (originally this was too costly; now inexpensive video game computers are largely very proficient image processors). This trend toward image processing will no doubt continue. These imagistic representations in computers take several forms. Some are bitmaps, in which each quantum element of the image ("pixel") has a color and brightness value; there are also various less data-intensive forms, such as vector graphics and compressed images. [See Berkeley on the "minimum sensible", e.g. Principles sec. 132, for early recognition of limits to visual resolution - quanta as elements of an image, as opposed to the mathematical continuum.] Commonly, computers are involved in image generation rather than consumption, as in the game systems where images, and lots of them, are the output. But when a computer is connected to a video input device, the system may analyze the image - as with face recognition (developed for security applications), and more importantly with video camera sensors - "eyes" - for robots. In these cases, the images are inputs to a causal computational system that controls (non-imaging) behavior - that is, the images are not used merely to produce images as output, as in an image enhancement or editing system, but to control other forms of behavior. In these cases the internal images are, as Tye puts it, functional images.
They have causal properties (as when an inner image derived from a camera causes a robot to turn to avoid a visible obstruction, or a computer image of a face leads to output of a sentence naming the owner of the face) which are the same causal properties that a visible image could have when viewed by a human. But no viewing homunculus is required. Nevertheless, the causal process is, at least in part, the functional equivalent of a homunculus - and so, in that non-damaging way, there is a core of truth to the argument.
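The idea of a functional image can be sketched in code. The following is a minimal, hypothetical illustration - the pixel grid, the brightness threshold, and the avoidance rule are all assumptions made up for the example, not any actual robot's control system. A bitmap from a camera, in which each pixel is just a brightness value, drives a steering decision; the image's causal role is exhausted by the computation over it, with no inner viewer anywhere.

```python
# A minimal sketch of a "functional image": a bitmap that controls
# non-imaging behavior (steering) rather than producing another image.
# The grid, threshold, and avoidance rule are illustrative assumptions.

def steer(bitmap, threshold=128):
    """Return 'left', 'right', or 'straight' based on where dark
    (obstruction-like) pixels cluster in a row-major brightness grid."""
    width = len(bitmap[0])
    left_dark = sum(1 for row in bitmap for v in row[: width // 2] if v < threshold)
    right_dark = sum(1 for row in bitmap for v in row[width // 2 :] if v < threshold)
    if left_dark == right_dark == 0:
        return "straight"
    return "right" if left_dark > right_dark else "left"

# An obstruction (dark pixels) on the left causes a turn to the right.
frame = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
print(steer(frame))  # -> right
```

Nothing views frame: its pixels simply enter into a causal computation, which is the core of the point that no homuncular viewer is needed.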

An image in a computer is similar to a latent image on photographic film in that it is a non-propositional store of information. Just as the exposed film contains an array of discrete bits of information (in the form of chemical states in crystals of photosensitive material), so the computer contains a (logical rather than physical) array of discrete bits of information as electrical states of a silicon chip, or magnetic domains on a disk surface (the exact physical form will change as the state of the art of computer long- and short-term storage evolves). But the latent image on film just sits there, inaccessible, until developed. The image in a computer is available for processing and analysis - and for control, based on its content, of further processes. In this respect it is more like a similarly invisible (non-optical) image in the brain than it is like the latent photographic image. No homuncular viewer is required because the brain image is in a viewer - it is part of the causal process that constitutes the viewer.

(See Tye for a discussion of related psychological arguments from Ryle 1949, Concept of Mind.)

 

2. The Argument from Simplicity - we must posit propositional representations, and it is simpler psychology to posit only a single representation system (Kosslyn 1984 p. 8). If one can account for behavior on the basis of a single, propositional, representation system, one should not posit additional systems.

This preference for simplicity is certainly reasonable methodology in general. The problems with it in the present case are twofold: first, one can't account for the psychological facts, including subjects' reports of their experience, without seriously ad hoc explanations of the evidence. Second, the system we are dealing with is quite possibly a biological kludge - layers of neural systems that deal with particular information-processing chores. The behavioral evidence is rife with revealing lapses from rationality, competence, and efficiency. Particularly revealing, to my mind, are our notorious deficiencies at mathematics. Arithmetic is the one area where even simple digital electronic devices excel. An efficient representation system makes arithmetic calculations trivial. Yet humans struggle with multi-digit multiplication, and collapse under the slight demands of mental division. So: possibly one could build a system with a unified representational code; but there is no reason to believe we, as real live evolved systems, have a central nervous system governed by considerations of elegance and simplicity.
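The point about efficient representations making arithmetic trivial can be made concrete. With a positional (binary) code, multi-digit multiplication - taxing for humans - reduces to a mechanical shift-and-add loop. This is the standard binary multiplication technique, offered here as an illustration of the representational point, not as any claim about how the text's psychological evidence was gathered.

```python
# Shift-and-add multiplication: given a positional (binary) representation,
# multi-digit multiplication is a trivial mechanical loop -- the kind of
# task simple digital devices excel at and humans find taxing.

def multiply(a, b):
    """Multiply non-negative integers using only shifts and additions."""
    product = 0
    while b:
        if b & 1:          # lowest bit of b set: add the current shifted a
            product += a
        a <<= 1            # shift a left (multiply by 2)
        b >>= 1            # shift b right (drop the processed bit)
    return product

print(multiply(1234, 5678))  # -> 7006652
```

The whole algorithm is a handful of steps repeated blindly - the efficiency lives in the representation, not in any cleverness of the processor.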

It is worth noting that in How the Mind Works, Steven Pinker presents considerations that count against viewing mental representations as all cast in a monolithic code. Pinker points out that the empirical evidence supports the view that there are at least four distinct formats of mental representation: visual images, phonological images, grammatical representations, and mentalese (89-90). In the section that follows, titled "Why so many kinds of representations?", Pinker argues that it is generally more efficient to have modular organization (in brains as well as in good computer programming practice), and that modules serving different purposes will be best served by differing representation systems. Different representational systems lend themselves better to some tasks than to others - one needs the right data representation for the job at hand. True enough, but again, it is worth bearing in mind that efficiency may not be the ruling principle in a layered evolved system. Subsystems originally serving one purpose may be pressed into new roles, and efficiency yields to the mere fact that it is possible to perform the roles at all.
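The "right representation for the job" point can be sketched with two toy encodings of the same image row - hypothetical formats invented for this illustration, with no pretense of matching any actual neural code. A raw bitmap gives constant-time access to any pixel; a run-length encoding is compact when the row has long uniform stretches, but random pixel access then requires walking the runs.

```python
# Same image row, two representations -- each efficient for a different
# task. (Toy formats, illustrating the "right data representation for
# the job at hand", not a claim about actual mental codes.)

def to_runs(row):
    """Run-length encode one row as [[value, count], ...]: compact for
    long uniform stretches, awkward for random pixel access."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

row = [0, 0, 0, 255, 255, 0]

# Bitmap form: constant-time access to any pixel.
print(row[3])          # -> 255

# Run-length form: storage proportional to the number of runs, but
# reading pixel 3 now means walking the runs from the start.
print(to_runs(row))    # -> [[0, 3], [255, 2], [0, 1]]
```

Neither format is better absolutely; each is better for a particular chore - which is just Pinker's argument for multiple representational systems in miniature.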


